The MINJA, InjecMEM, and ToxicSkills campaigns poison AI agents' memory files, and GPT-5.3-Codex achieved a 72% exploit success rate on EVMbench, a benchmark released by OpenAI and Paradigm. Drawing on the techniques and defenses from these developments, this article examines how AI is becoming both a target of attacks and a weapon for attackers.